AI Is Moving Fast. Child Protection Isn’t. Not a Future Risk—but a Rapidly Escalating Crisis

It is a new year, one that, more than ever, begins under the sign of rapidly accelerating AI.

At the end of 2025, I visited Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI). I had been invited by Riana Pfefferkorn, who researches the law and policy implications of emerging technologies, including AI.

My research focuses on children and young people, and I am particularly concerned with understanding how their rights can best be protected in the face of these developments. At Stanford HAI, one of the institute's core efforts is to track, map, and synthesize developments across the rapidly evolving field of artificial intelligence. A key outcome of this work is the AI Index Report 2025, a comprehensive account of how artificial intelligence is accelerating across society, the economy, and governance.

The report shows just how fast things are moving. Performance on new, more demanding benchmarks such as MMMU, GPQA, and SWE-bench improved dramatically, with gains of up to 67 percentage points in a single year. At the same time, AI systems are becoming cheaper, more accessible, and increasingly realistic.

This is especially visible in video generation. Models like Sora and Veo 2 represent a clear leap over 2023 systems, producing highly realistic, cinematic content at a speed and scale that would have been unthinkable just a year ago.

AI child sexual abuse imagery is not a future risk – it is a current and accelerating crisis
— Internet Watch Foundation CEO Kerry Smith

AI has supercharged the already endless flow of visual information we scroll through every day on social media and other visual, networked technologies. Images and videos are no longer just shared; they are generated, scaled, and optimized by machines.

Over five billion people now collectively produce an estimated 2.5 quintillion bytes of data every day, and by one widely cited estimate, 90% of the world's data has been created in just the last two years (U.S. Chamber of Commerce Foundation, 2023). This acceleration of visual production raises urgent questions about how children and young people are encountered, represented, and protected within digital culture.

As Riana Pfefferkorn has warned, “computer-generated child sex abuse imagery poses significant challenges to law enforcement, including constitutional limits on criminal prosecutions” (Pfefferkorn, 2024). Advances in generative AI are making it increasingly feasible to create highly realistic child sexual abuse imagery, exposing serious gaps in existing legal and regulatory frameworks.

This is a regulatory blind spot. The Internet Watch Foundation (IWF) has warned that AI-generated child sexual abuse material is “not a future risk—it is a current and accelerating crisis.” In 2024, the IWF recorded 245 reports containing actionable AI-generated child sexual abuse imagery, compared to just 51 reports in 2023—a 380% increase in a single year. These reports alone included 7,644 images and a growing number of videos.

So where can we find hope as we enter a new year?

Perhaps in the growing recognition that this is no longer a debate about abstract trade-offs between innovation, privacy, and safety. As the Internet Watch Foundation has argued, the real choice—particularly in Europe—is not between privacy and protection, but between indifference and compassion.

Hope lies in the fact that the harms are now documented, the grey zones exposed, and the technical and legal arguments more precise. We know far more than we did just a few years ago. We know that children’s rights are violated not only when abuse occurs, but when it is recorded, replicated, and endlessly circulated. And we know that inaction is itself a political choice.

Protecting children is not in opposition to freedom or innovation. It is a precondition for both.


Key Takeaways from Pfefferkorn's Policy Brief

Schools are largely unprepared to address the risks of AI-generated child sexual abuse material (CSAM), including the use of so-called “nudify” apps and the circulation of deepfake nudes among students. Few schools educate students about these risks or train educators to respond effectively when incidents occur.

Recent criminalization of AI-generated CSAM is insufficient on its own. While many states have updated criminal law, most have failed to provide clear guidance for how schools should handle cases where minors themselves create or share such material.

Schools need clearer legal and policy frameworks. States should update mandated reporting and school discipline policies to clarify when educators are required to report deepfake nude incidents and to explicitly recognize such behavior as a form of cyberbullying or sexual harm.

Punitive approaches are often inappropriate for minors. Responses to student-on-student AI-generated CSAM should prioritize behavioral and educational interventions over criminal punishment, grounded in child development, trauma-informed practices, and principles of educational equity.


References:

Internet Watch Foundation (IWF). https://www.iwf.org.uk

Pfefferkorn, R. (2024, February 5). Addressing computer-generated child sex abuse imagery: Legal framework and policy implications. Lawfare Institute, in cooperation with Brookings.

Pfefferkorn, R., Grossman, S., & Liu, S. (2025). AI-generated child sexual abuse material: Insights from educators, platforms, law enforcement, legislators, and victims. Stanford Digital Repository. https://purl.stanford.edu/mn692xc5736

Stanford Institute for Human-Centered Artificial Intelligence (HAI). (2025). AI Index Report 2025. Stanford University.




